Conventional cameras capture image irradiance on a sensor and convert it to RGB images using an image signal processor (ISP). The images can then be used for photography or visual computing tasks in a variety of applications, such as public safety surveillance and autonomous driving. One can argue that since RAW images contain all the captured information, the conversion of RAW to RGB using an ISP is not necessary for visual computing. In this paper, we propose a novel $\rho$-Vision framework to perform high-level semantic understanding and low-level compression using RAW images, without the ISP subsystem that has been used for decades. Considering the scarcity of available RAW image datasets, we first develop an unpaired CycleR2R network, based on the unsupervised CycleGAN, to train modular unrolled ISP and inverse ISP (invISP) models using unpaired RAW and RGB images. We can then flexibly generate simulated RAW images (simRAW) from any existing RGB image dataset and finetune models originally trained for the RGB domain to process real-world camera RAW images. We demonstrate object detection and image compression in the RAW domain using a RAW-domain YOLOv3 and a RAW image compressor (RIC) on snapshots from various cameras. Quantitative results reveal that RAW-domain task inference provides better detection accuracy and compression than RGB-domain processing. Furthermore, the proposed $\rho$-Vision generalizes across various camera sensors and different task-specific models. By eliminating the ISP, the proposed $\rho$-Vision also offers potential reductions in computation and processing time.
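As a minimal illustration of how simRAW generation might look, the sketch below implements a toy modular inverse ISP (inverse gamma, inverse color correction, Bayer mosaicking) in PyTorch; the module choices, parameterization, and class names are assumptions for illustration, not the paper's CycleR2R code.

```python
# Toy modular unrolled inverse ISP: RGB -> simulated RAW (simRAW).
import torch
import torch.nn as nn

class InvISP(nn.Module):
    """Inverse gamma -> inverse color correction -> RGGB mosaicking."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(2.2))   # learned tone-curve exponent
        self.ccm_inv = nn.Parameter(torch.eye(3))      # learned inverse 3x3 color matrix

    def forward(self, rgb):                            # rgb: (B, 3, H, W) in [0, 1]
        lin = rgb.clamp(min=1e-6) ** self.gamma        # undo gamma compression
        lin = torch.einsum('ij,bjhw->bihw', self.ccm_inv, lin)  # undo color correction
        return self.mosaic(lin)                        # sample an RGGB Bayer pattern

    @staticmethod
    def mosaic(lin):
        B, _, H, W = lin.shape
        raw = torch.zeros(B, 1, H, W, device=lin.device)
        raw[:, 0, 0::2, 0::2] = lin[:, 0, 0::2, 0::2]  # R
        raw[:, 0, 0::2, 1::2] = lin[:, 1, 0::2, 1::2]  # G
        raw[:, 0, 1::2, 0::2] = lin[:, 1, 1::2, 0::2]  # G
        raw[:, 0, 1::2, 1::2] = lin[:, 2, 1::2, 1::2]  # B
        return raw
```

simRAW images produced this way could then be used to finetune an RGB-pretrained detector such as YOLOv3 for RAW-domain inference.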
Deep learning (DL)-based tomographic SAR imaging algorithms are receiving increasing attention. Typically, they use an unfolding network to mimic the iterative calculation of classical compressive sensing (CS)-based methods and process each range-azimuth unit individually. In this way, however, only one-dimensional features are effectively utilized, and the correlation between adjacent resolution units is simply ignored. To address this, we propose a new model-data-driven network that achieves tomoSAR imaging based on multi-dimensional features. Guided by the deep unfolding methodology, a two-dimensional deep unfolding imaging network is constructed. On top of it, we add two 2D processing modules, both convolutional encoder-decoder structures, to effectively enhance the multi-dimensional features of the imaging scene. Meanwhile, to train the proposed multi-feature-based imaging network, we construct a tomoSAR simulation dataset consisting entirely of simulated building data. Experiments verify the effectiveness of the model. Compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our proposed method achieves better structural completeness while maintaining decent imaging accuracy.
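For intuition, here is a minimal sketch of one stage of a 2D deep-unfolding imaging network: an unrolled ISTA-style update followed by a convolutional encoder-decoder that refines features across adjacent resolution units. The measurement model, layer sizes, and the use of real-valued tensors (tomoSAR data are complex-valued) are simplifying assumptions, not the paper's architecture.

```python
# One stage of a 2D deep-unfolding imaging network (real-valued simplification).
import torch
import torch.nn as nn

class UnfoldingStage(nn.Module):
    def __init__(self, A):
        super().__init__()
        self.register_buffer('A', A)                  # (M, N) measurement/steering matrix
        self.step = nn.Parameter(torch.tensor(0.1))   # learned gradient step size
        self.theta = nn.Parameter(torch.tensor(0.01)) # learned soft-threshold
        self.refine = nn.Sequential(                  # 2D encoder-decoder feature module
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x, y):
        # x: (B, N, U) current reflectivity, y: (B, M, U) measurements,
        # U = number of range-azimuth units processed jointly.
        grad = self.A.T @ (self.A @ x - y)            # data-fidelity gradient
        z = x - self.step * grad
        z = torch.sign(z) * torch.relu(z.abs() - self.theta)  # sparsity (soft-threshold)
        # view the elevation-by-unit slice as a 2D image and refine with convs,
        # exploiting correlation across adjacent resolution units
        img = z.transpose(1, 2).reshape(z.shape[0], 1, -1, z.shape[1])
        z = z + self.refine(img).reshape(z.shape[0], -1, z.shape[1]).transpose(1, 2)
        return z
```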
Benefiting from a relatively large aperture angle combined with a wide transmit bandwidth, near-field synthetic aperture radar (SAR) provides a high-resolution image of a target's scattering distribution, i.e., its hot spots. Meanwhile, the imaging result inevitably suffers degradation from sidelobes, clutter, and noise, hindering the retrieval of information about the target. To restore the image, current methods make simplifying assumptions, for example, that the point spread function (PSF) is spatially consistent or that the target consists of sparse point scatterers. They thus achieve limited restoration performance in terms of the target's shape, especially for complex targets. To address these issues, this work conducts a preliminary study on restoration with recent, promising deep learning inverse techniques. We reformulate the degradation model into a spatially variable complex-convolution model, in which the near-field SAR system's response is taken into account. Adhering to it, a model-based deep learning network is designed to restore the image. A dataset of degraded images simulated from multiple complex target models is constructed to validate the network; all images are generated using an electromagnetic simulation tool. Experiments on the dataset confirm the method's effectiveness. Compared with current methods, superior performance is achieved in estimating the target's shape and energy.
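A minimal sketch of the kind of spatially variable complex-convolution degradation model described above: the scene is split into blocks, each degraded by its own complex PSF. The block grid and PSF bank are illustrative assumptions, not the paper's actual system response.

```python
# Spatially variable complex convolution as a block-wise degradation model.
import torch
import torch.nn.functional as F

def degrade(scene, psf_bank, grid=4):
    """scene: (H, W) complex reflectivity; psf_bank: (grid, grid, k, k) complex
    PSFs, one per spatial block, approximating a spatially variable response."""
    H, W = scene.shape
    k = psf_bank.shape[-1]                  # odd kernel size assumed
    p = k // 2
    pad = lambda t: F.pad(t[None, None], (p, p, p, p))
    pr, pi = pad(scene.real), pad(scene.imag)
    out = torch.zeros_like(scene)
    bh, bw = H // grid, W // grid
    conv = lambda a, b: F.conv2d(a, b[None, None])
    for i in range(grid):
        for j in range(grid):
            psf = psf_bank[i, j]
            sl = (..., slice(i*bh, (i+1)*bh + 2*p), slice(j*bw, (j+1)*bw + 2*p))
            # complex convolution via its real/imaginary parts
            re = conv(pr[sl], psf.real) - conv(pi[sl], psf.imag)
            im = conv(pr[sl], psf.imag) + conv(pi[sl], psf.real)
            out[i*bh:(i+1)*bh, j*bw:(j+1)*bw] = torch.complex(re, im)[0, 0]
    return out
```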
This work focuses on 3D radar imaging inverse problems. Current methods produce undifferentiated results that suffer task-dependent information-retrieval loss and thus do not meet a task's specific demands well. For example, biased scattering energy may be acceptable for screening imaging but not for scattering diagnosis. To address this issue, we propose a new task-oriented imaging framework. The imaging principle is made task-oriented through an analysis phase that derives the task's demands. The imaging model is multi-cognition regularized to embed and fulfill those demands. The imaging method is designed to be generalized: couplings between cognitions are decoupled and solved individually with approximation and variable-splitting techniques. Scattering diagnosis, person screening imaging, and parcel screening imaging are given as example tasks. Experiments on data from two systems indicate that the proposed framework outperforms current ones in task-dependent information retrieval.
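To make the variable-splitting idea concrete, the sketch below shows an ADMM-style solver in which each "cognition" regularizer is decoupled into its own auxiliary variable and proximal update, coupled only through the data-fidelity step; the two priors shown (sparsity and an approximate smoothness prox) are illustrative assumptions.

```python
# Variable splitting over two regularizers, solved individually via prox steps.
import numpy as np

def prox_l1(v, t):                       # sparsity cognition
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_smooth(v, t):                   # smoothness cognition (neighbors frozen)
    avg = 0.5 * (np.roll(v, 1) + np.roll(v, -1))
    return (v + t * avg) / (1.0 + t)     # pulls each entry toward its neighbor average

def split_solve(A, y, lam1=0.1, lam2=0.1, rho=1.0, iters=50):
    x = np.zeros(A.shape[1]); z1 = x.copy(); z2 = x.copy()
    u1 = np.zeros_like(x); u2 = np.zeros_like(x)
    lhs = A.T @ A + 2 * rho * np.eye(A.shape[1])
    for _ in range(iters):
        # data-fidelity step couples all splits
        x = np.linalg.solve(lhs, A.T @ y + rho * (z1 - u1 + z2 - u2))
        # each cognition is solved individually through its proximal operator
        z1 = prox_l1(x + u1, lam1 / rho)
        z2 = prox_smooth(x + u2, lam2 / rho)
        u1 += x - z1
        u2 += x - z2
    return x
```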
Real-time dynamic environment perception is crucial for autonomous robots in crowded spaces. Although popular voxel-based mapping methods can efficiently represent 3D obstacles with arbitrarily complex shapes, they can hardly distinguish static from dynamic obstacles, leading to limited obstacle-avoidance performance. Although plenty of learning-based dynamic obstacle detection algorithms exist in autonomous driving, the limited computational resources of a quadrotor cannot achieve real-time performance with these methods. To address these issues, we propose a real-time dynamic obstacle tracking and mapping system for quadrotor obstacle avoidance using an RGB-D camera. The proposed system first utilizes depth images together with an occupancy voxel map to generate potential dynamic obstacle regions as proposals. With these region proposals, a Kalman filter and our continuity filter are applied to track each dynamic obstacle. Finally, an environment-aware trajectory prediction method based on a Markov chain is proposed using the states of the tracked dynamic obstacles. We implemented the proposed system on a customized quadrotor with a navigation planner. Simulation and physical experiments show that our method can successfully track and represent obstacles in dynamic environments and safely avoid obstacles.
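As a rough sketch of the tracking stage, below is a constant-velocity Kalman filter over an obstacle's 3D position, of the kind that could track each region proposal; the state layout and noise levels are illustrative assumptions.

```python
# Constant-velocity Kalman filter for one dynamic obstacle's 3D center.
import numpy as np

class ObstacleKF:
    def __init__(self, pos, dt=0.05):
        self.x = np.hstack([pos, np.zeros(3)])              # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6); self.F[:3, 3:] = dt * np.eye(3) # constant-velocity model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # measure position only
        self.Q = 0.01 * np.eye(6)                           # process noise
        self.R = 0.05 * np.eye(3)                           # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):                                    # z: detected center (3,)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
```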
Navigating dynamic environments requires a robot to generate collision-free trajectories and actively avoid moving obstacles. Most previous works design path-planning algorithms based on a single map representation (e.g., geometric, occupancy, or ESDF maps). Although they have shown success in static environments, these methods cannot reliably handle static and dynamic obstacles simultaneously due to the limitations of the map representation. To address the problem, this paper proposes a gradient-based B-spline trajectory optimization algorithm utilizing the robot's onboard vision. Depth vision enables the robot to track and represent dynamic objects geometrically based on a voxel map. The proposed optimization first adopts a circle-based guide-point algorithm to approximate the cost and gradient of avoiding static obstacles. Then, with the visually detected moving objects, our receding-horizon distance field is simultaneously used to prevent dynamic collisions. Finally, an iterative re-guide strategy is applied to generate collision-free trajectories. Simulation and physical experiments prove that our method can run in real time to safely navigate dynamic environments.
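A minimal sketch of a gradient-based optimization step over B-spline control points, using a hinge clearance cost against a queried distance field; the cost form, weights, and distance-field interface are assumptions for illustration, not the paper's exact formulation.

```python
# Gradient descent on B-spline control points against a clearance cost.
import numpy as np

def collision_cost_grad(ctrl_pts, dist_fn, grad_fn, d_safe=0.5):
    """ctrl_pts: (N, 3) B-spline control points; dist_fn/grad_fn query a
    distance field and its spatial gradient at a 3D point."""
    cost, grad = 0.0, np.zeros_like(ctrl_pts)
    for i, p in enumerate(ctrl_pts):
        d = dist_fn(p)
        if d < d_safe:                                   # inside the unsafe margin
            cost += (d_safe - d) ** 2
            grad[i] = -2.0 * (d_safe - d) * grad_fn(p)   # cost gradient w.r.t. p
    return cost, grad

def optimize(ctrl_pts, dist_fn, grad_fn, lr=0.05, iters=100):
    for _ in range(iters):
        _, g = collision_cost_grad(ctrl_pts, dist_fn, grad_fn)
        ctrl_pts = ctrl_pts - lr * g                     # push points away from obstacles
    return ctrl_pts
```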
Anomaly detection for multivariate time series is meaningful for system behavior monitoring. This paper proposes an anomaly detection method based on unsupervised Short- and Long-term Mask Representation learning (SLMR). The main idea is to extract the short-term local dependency patterns and long-term global trend patterns of multivariate time series using multi-scale residual convolutions and a gated recurrent unit (GRU), respectively. Moreover, our method can comprehend temporal context and feature correlations by combining spatiotemporal masked self-supervised representation and sequence splitting. Considering that features differ in importance, we introduce an attention mechanism to adjust the contribution of each feature. Finally, a prediction-based model and a reconstruction-based model are integrated to focus on single-timestamp prediction and the latent representation of the time series. Experiments show that our method outperforms other state-of-the-art models on three real-world datasets. Further analysis shows that our method excels at interpretability.
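For concreteness, a minimal sketch of how a prediction-based and a reconstruction-based score could be fused with per-feature attention weights; the fusion weight and score form are assumptions, not the paper's exact formulation.

```python
# Fused anomaly score from prediction error and reconstruction error.
import numpy as np

def anomaly_score(x_t, x_pred, x_recon, feat_attn, gamma=0.5):
    """x_t: (F,) observed features at one timestamp; x_pred: (F,) forecast;
    x_recon: (F,) reconstruction; feat_attn: (F,) learned feature weights."""
    pred_err = (x_t - x_pred) ** 2               # single-timestamp prediction error
    recon_err = (x_t - x_recon) ** 2             # latent-representation fit error
    per_feat = gamma * pred_err + (1 - gamma) * recon_err
    return float(np.sum(feat_attn * per_feat))   # attention re-weights features

# A timestamp is flagged anomalous when its score exceeds a threshold chosen
# on validation data.
```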
High-speed, high-resolution stereoscopic (H2-STEREO) video allows us to perceive dynamic 3D content at fine granularity. However, acquiring H2-STEREO video with commodity cameras remains challenging. Existing spatial super-resolution or temporal frame-interpolation methods provide compromise solutions that lack temporal or spatial details, respectively. To alleviate this problem, we propose a dual-camera system, in which one camera captures high-spatial-resolution low-frame-rate (HSR-LFR) video with rich spatial details, and the other captures low-spatial-resolution high-frame-rate (LSR-HFR) video with smooth temporal details. We then devise a Learned Information Fusion network (LIFnet) that exploits cross-camera redundancies to enhance both camera views and thereby effectively reconstruct the H2-STEREO video. We utilize a disparity network to transfer spatiotemporal information across views even in large-disparity scenes, based on which we propose disparity-guided flow-based warping for the LSR-HFR view and complementary warping for the HSR-LFR view. A multi-scale fusion method in the feature domain is proposed to minimize occlusion-induced warping ghosts and holes in the HSR-LFR view. LIFnet is trained end-to-end using a high-quality stereo video dataset we collected from YouTube. Extensive experiments demonstrate that our model outperforms existing state-of-the-art methods on both synthetic data and camera-captured real data. Ablation studies explore various aspects, including spatiotemporal resolution, camera baseline, camera desynchronization, long/short exposure, and applications, to fully understand its capability for potential applications.
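A minimal sketch of disparity-guided warping, which samples the other stereo view along the horizontal epipolar line using an estimated disparity map; the sampling conventions follow PyTorch's grid_sample, and the disparity source is assumed given.

```python
# Warp one stereo view into the reference view using a disparity map.
import torch
import torch.nn.functional as F

def warp_by_disparity(src, disparity):
    """src: (B, C, H, W) other-view frame; disparity: (B, 1, H, W) in pixels.
    Returns src resampled into the reference view."""
    B, _, H, W = src.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    xs = xs[None].float().to(src.device) - disparity[:, 0]  # shift along epipolar line
    ys = ys[None].float().to(src.device).expand(B, -1, -1)
    # normalize the sampling grid to [-1, 1] as grid_sample expects
    grid = torch.stack([2 * xs / (W - 1) - 1, 2 * ys / (H - 1) - 1], dim=-1)
    return F.grid_sample(src, grid, align_corners=True)
```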
Teacher-free online knowledge distillation (KD) aims to collaboratively train an ensemble of multiple student models that distill knowledge from each other. Although existing online KD methods achieve desirable performance, they often focus on class probabilities as the core knowledge type, ignoring valuable feature-representational information. We present a Mutual Contrastive Learning (MCL) framework for online KD. The core idea of MCL is to perform mutual interaction and transfer of contrastive distributions among networks in an online manner. Our MCL can aggregate cross-network embedding information and maximize the lower bound of the mutual information between two networks. This enables each network to learn extra contrastive knowledge from the others, leading to better feature representations and thus improving performance on visual recognition tasks. Beyond the final layer, we extend MCL to several intermediate layers assisted by auxiliary feature-refinement modules, which further enhances the representational power for online KD. Experiments on image classification and transfer learning to visual recognition tasks show that MCL brings consistent performance gains over state-of-the-art online KD approaches. This superiority demonstrates that MCL can guide the network to generate better feature representations. Our code is publicly available at https://github.com/winycg/mcl.
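To illustrate the contrastive interaction, here is a minimal symmetric InfoNCE-style loss between two networks' embeddings, which lower-bounds their mutual information; the batch-as-negatives setup and temperature are illustrative assumptions rather than MCL's exact objective.

```python
# Symmetric InfoNCE between two networks' embeddings of the same batch.
import torch
import torch.nn.functional as F

def mutual_contrastive_loss(emb_a, emb_b, tau=0.1):
    """emb_a, emb_b: (B, D) embeddings of the same batch from networks A and B;
    matching indices are positive pairs, other batch items are negatives."""
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits_ab = a @ b.T / tau                 # A anchors vs. B candidates
    logits_ba = b @ a.T / tau                 # B anchors vs. A candidates
    labels = torch.arange(a.shape[0], device=a.device)
    # each direction transfers contrastive knowledge to the other network
    return 0.5 * (F.cross_entropy(logits_ab, labels) +
                  F.cross_entropy(logits_ba, labels))
```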
We study the problem of offline imitation learning (IL), in which an agent aims to learn an optimal expert behavior policy without additional online environment interaction. Instead, the agent is provided with a supplementary offline dataset of suboptimal behaviors. Prior works addressing this problem either require the expert data to occupy the majority proportion of the offline dataset, or need to learn a reward function and perform offline reinforcement learning (RL) afterwards. In this paper, we aim to solve the problem without the additional steps of reward learning and offline RL training, even when demonstrations contain a large proportion of suboptimal data. Built on behavior cloning (BC), we introduce an additional discriminator to distinguish expert from non-expert data. We propose a cooperative framework to boost the learning of both tasks, and based on this framework, we design a new IL algorithm in which the discriminator's outputs serve as the weights of the BC loss. Experimental results show that our proposed algorithm achieves higher returns and faster training than baseline algorithms.
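A minimal sketch of the discriminator-weighted BC idea: the discriminator's output weights each sample's behavior-cloning loss, so expert-like pairs contribute more; network shapes and the weighting form are assumptions for illustration.

```python
# Behavior cloning with per-sample weights from a learned discriminator.
import torch

def weighted_bc_loss(policy, discriminator, states, actions):
    """states: (B, S); actions: (B, A) from the mixed offline dataset."""
    with torch.no_grad():
        # discriminator scores how expert-like each (state, action) pair is
        w = torch.sigmoid(discriminator(torch.cat([states, actions], dim=1)))
    pred = policy(states)                          # deterministic policy head
    per_sample = ((pred - actions) ** 2).mean(dim=1)
    return (w.squeeze(-1) * per_sample).mean()     # expert-like pairs weigh more

# The discriminator itself is trained to output 1 on expert pairs and 0 on
# suboptimal pairs, cooperatively with the policy.
```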